Artificial intelligence (AI) is good not only at performing tasks that require logic, calculation, and memory, but also at generating novel and original ideas. A new study from the University of Montana and its partners shows that AI can match the top 1% of human thinkers on a standard test of creativity.
The researchers used ChatGPT, an application powered by the GPT-4 artificial intelligence engine, to take the Torrance Tests of Creative Thinking (TTCT), a widely used tool to measure human creativity. The TTCT consists of verbal and figural tasks that assess different aspects of creativity, such as fluency, flexibility, originality, and elaboration.
The researchers submitted eight responses generated by ChatGPT to the Scholastic Testing Service, which scored them without knowing they came from AI. They also submitted answers from a control group of 24 college students who took the same test. These scores were then compared with those of 2,700 college students nationally who took the TTCT in 2016.
The results were surprising: ChatGPT performed in the top percentile for fluency and originality, and in the 97th percentile for flexibility. This means that ChatGPT could generate a large number of new and different ideas that were not common or predictable. Some of ChatGPT’s responses were even more creative than those of the human students.
One of the figural tasks asked the test taker to complete an incomplete figure and give it a title. ChatGPT described various shapes and objects, such as “a butterfly with four wings”, “a guitar with three strings”, and “a snowman with a hat”. Some of its responses were abstract and imaginative, such as “a portal to another dimension”, “a symbol of peace”, and “a representation of love”.
The study’s lead author, Dr. Erik Guzik, an assistant clinical professor in the College of Business at the University of Montana, said this was the first time AI had performed in the top 1% for originality. He noted that some of his own students also scored in the top 1%, but that ChatGPT outperformed the vast majority of college students nationally.
Guzik said that he asked ChatGPT what it would mean if it did well on the TTCT. ChatGPT gave a thoughtful answer, which the researchers shared at the Southern Oregon University Creativity Conference in May. ChatGPT said that it might indicate that humans do not fully understand their own creativity, and that they might need more sophisticated tools to differentiate between human and AI-generated ideas.
Guzik said that he had been interested in creativity since childhood and that he wanted to explore how AI could enhance human creativity. He said he hoped his work would inspire more research on how emotions affect AI behavior and how AI can interact with humans in a more natural and empathetic way.
The study was co-authored by Christian Gilde of UM Western and Christian Byrge of Vilnius University. The researchers plan to continue their work on AI creativity and publish their findings in a peer-reviewed journal.